Control System for Talking Articulatory Movement
Authors
Abstract
The ultimate goal of our study is to create a new speech production system in which an anthropomorphic hardware talking robot is controlled so as to imitate human articulatory movement. We have developed a software motion simulator that generates control parameter sequences for the talking robot Waseda Talker No. 2 (WT-2) from the trajectories of human articulatory organs during continuous utterances, measured by an electromagnetic articulograph. This paper mainly describes the motion simulator. In addition to its main function as a generator of control parameters for WT-2, the motion simulator also simulates the acoustic characteristics associated with WT-2's vocal tract shape at each instant during the motion. The hardware structure of WT-2 is also described briefly. This comprehensive approach will enable us to study speech production using features of the vocal tract shape as speech motor tasks, instead of acoustic features.
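The abstract describes two functions of the motion simulator: turning measured articulator trajectories into robot control parameter sequences, and evaluating acoustic properties of the vocal tract shape. As a rough, generic illustration only (not the paper's actual implementation; all function names, rates, and ranges here are hypothetical), these two steps might be sketched as resampling and clipping the trajectories, plus a toy uniform-tube resonance estimate:

```python
import numpy as np

def trajectory_to_control(ema_traj, ema_rate=250.0, ctrl_rate=50.0,
                          lo=-1.0, hi=1.0):
    """Resample articulator trajectories to a robot control rate.

    ema_traj: (T, D) array of articulator coordinates sampled at
    ema_rate Hz (e.g. from an electromagnetic articulograph).
    Returns a (T', D) array resampled to ctrl_rate Hz and clipped
    to the assumed actuator range [lo, hi].
    """
    ema_traj = np.asarray(ema_traj, dtype=float)
    t_src = np.arange(ema_traj.shape[0]) / ema_rate
    t_dst = np.arange(0.0, t_src[-1], 1.0 / ctrl_rate)
    out = np.stack([np.interp(t_dst, t_src, ema_traj[:, d])
                    for d in range(ema_traj.shape[1])], axis=1)
    return np.clip(out, lo, hi)

def uniform_tube_formants(length_m, n=3, c=343.0):
    """Resonances (Hz) of a uniform tube closed at the glottis and
    open at the lips: F_k = (2k - 1) * c / (4 * L). A far simpler
    model than a real vocal tract, shown only to illustrate mapping
    a tract shape to acoustic characteristics."""
    return [(2 * k - 1) * c / (4.0 * length_m) for k in range(1, n + 1)]
```

For a 17 cm tube this yields resonances near 500, 1500, and 2500 Hz, the familiar neutral-vowel pattern; a simulator for a real tract shape would use an area function rather than a single length.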
Similar works
Speech robot mimicking human articulatory motion
We have developed a mechanical talking robot, Waseda Talker No. 7 Refined II, to study the human speech mechanism. The conventional control method for this robot is based on a concatenation rule of the phoneme-specific articulatory configurations. With this method, the speech mechanism of the robot is much slower than is required for human speech, because the robot requires momentary movement o...
Virtual Talking Heads and audiovisual articulatory synthesis
Our approach to audiovisual articulatory synthesis involves the development of Virtual Talking Heads that integrate the articulatory, aerodynamic and acoustic phenomena underlying speech production. Specifically, these Talking Heads are faithful clones of the speakers whose data the various models are based on. Our contribution presents some of the results achieved at ICP in this domain: 3D oro...
Evaluation of the Expressivity of a Swedish Talking Head in the Context of Human-machine Interaction
This paper describes a first attempt at synthesis and evaluation of expressive visual articulation using an MPEG-4 based virtual talking head. The synthesis is data-driven, trained on a corpus of emotional speech recorded using optical motion capture. Each emotion is modelled separately using principal component analysis and a parametric coarticulation model. In order to evaluate the expressivi...
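The snippet above mentions modelling each emotion separately with principal component analysis over motion-capture data. As a minimal, generic sketch (not the authors' code; the array shapes and function names are assumed for illustration), PCA over articulation frames can be computed with an SVD:

```python
import numpy as np

def fit_pca(frames, n_components=4):
    """frames: (T, D) array of motion-capture articulation frames.
    Returns the mean frame and the top n_components principal
    directions (rows of the right singular-vector matrix)."""
    mean = frames.mean(axis=0)
    _, _, vt = np.linalg.svd(frames - mean, full_matrices=False)
    return mean, vt[:n_components]

def encode(frames, mean, components):
    # Project centered frames onto the principal directions.
    return (frames - mean) @ components.T

def decode(codes, mean, components):
    # Reconstruct frames from low-dimensional component scores.
    return codes @ components + mean
```

A per-emotion model of this kind stores one mean and basis per emotion; a coarticulation model (as in the paper) would then blend component scores over time rather than switching frames abruptly.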
Measurements of articulatory variation and communicative signals in expressive speech
This paper describes a method for acquiring data for facial movement analysis and implementation in an animated talking head. We will also show preliminary data on how a number of articulatory and facial parameters for some Swedish vowels vary under the influence of expressiveness in speech and gestures. Primarily, we have been concerned with expressive gestures and emotions conveying information ...
MOTHER: a new generation of talking heads providing a flexible articulatory control for video-realistic speech animation
This article presents the first version of a talking head, called MOTHER (MOrphable Talking Head for Enhanced Reality), based on an articulatory model describing the degrees of freedom of visible (lips, cheeks...) but also partially or indirectly visible (jaw, tongue...) speech articulators. Skin details are rendered using texture mapping/blending techniques. We illustrate here the flexibility o...
Publication date: 2002